Waikato
Experiments with Optimal Model Trees
Roselli, Sabino Francesco, Frank, Eibe
Model trees provide an appealing way to perform interpretable machine learning for both classification and regression problems. In contrast to "classic" decision trees with constant values in their leaves, model trees can use linear combinations of predictor variables in their leaf nodes to form predictions, which can help achieve higher accuracy and smaller trees. Typical algorithms for learning model trees from training data work in a greedy fashion, growing the tree in a top-down manner by recursively splitting the data into smaller and smaller subsets. Crucially, the selected splits are only locally optimal, potentially rendering the tree overly complex and less accurate than a tree whose structure is globally optimal for the training data. In this paper, we empirically investigate the effect of constructing globally optimal model trees for classification and regression with linear support vector machines at the leaf nodes. To this end, we present mixed-integer linear programming formulations to learn optimal trees, compute such trees for a large collection of benchmark data sets, and compare their performance against greedily grown model trees in terms of interpretability and accuracy. We also compare to classic optimal and greedily grown decision trees, random forests, and support vector machines. Our results show that optimal model trees can achieve competitive accuracy with very small trees. We also investigate the effect on accuracy of replacing axis-parallel splits with multivariate ones, forgoing interpretability while potentially obtaining greater accuracy.
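To make the distinction between constant leaves and model leaves concrete, the sketch below grows a shallow scikit-learn decision tree greedily and then fits a linear SVM on the training instances routed to each leaf. This is only a rough, greedy stand-in for illustration; it is not the mixed-integer linear programming formulation studied in the paper, and all dataset and parameter choices are assumptions.

```python
# Greedy sketch of a model tree: shallow tree structure + linear SVMs in the leaves.
# Illustration only; the paper learns a globally optimal structure with a MILP solver.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

structure = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
leaf_ids = structure.apply(X)  # which leaf each training instance lands in

# Fit one linear SVM per leaf, on the instances routed to that leaf.
leaf_models = {}
for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf
    if len(np.unique(y[mask])) > 1:
        leaf_models[leaf] = LinearSVC(dual=False).fit(X[mask], y[mask])

def predict(X_new):
    leaves = structure.apply(X_new)
    preds = structure.predict(X_new)  # fall back to the constant leaf prediction
    for leaf, model in leaf_models.items():
        mask = leaves == leaf
        if mask.any():
            preds[mask] = model.predict(X_new[mask])
    return preds

print("training accuracy of the toy model tree:", (predict(X) == y).mean())
```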
Evaluation for Regression Analyses on Evolving Data Streams
Sun, Yibin, Gomes, Heitor Murilo, Pfahringer, Bernhard, Bifet, Albert
The paper explores the challenges of regression analysis in evolving data streams, an area that remains relatively underexplored compared to classification. We propose a standardized evaluation process for regression and prediction interval tasks in streaming contexts. Additionally, we introduce an innovative drift simulation strategy capable of synthesizing various drift types, including the less-studied incremental drift. Comprehensive experiments with state-of-the-art methods, conducted under the proposed process, validate the effectiveness and robustness of our approach.
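The paper's drift simulation strategy and evaluation process are not reproduced here; the sketch below only illustrates the two underlying ideas in plain NumPy, namely a synthetic regression stream whose target function shifts incrementally over time and a prequential (test-then-train) evaluation loop. All names and parameters are illustrative assumptions.

```python
# Illustrative only: a synthetic regression stream with incremental drift,
# evaluated prequentially (predict first, then learn).
import numpy as np

rng = np.random.default_rng(0)

def stream(n_steps=5000, drift_start=2000, drift_end=4000):
    """Yield (x, y) pairs whose target slope drifts incrementally from 1.0 to 3.0."""
    for t in range(n_steps):
        progress = np.clip((t - drift_start) / (drift_end - drift_start), 0.0, 1.0)
        slope = 1.0 + 2.0 * progress          # incremental (gradual) concept drift
        x = rng.uniform(-1.0, 1.0, size=3)
        y = slope * x[0] + 0.5 * x[1] + rng.normal(scale=0.1)
        yield x, y

# Very simple online learner: a linear model trained with SGD.
w, b, lr = np.zeros(3), 0.0, 0.05
abs_errors = []
for x, y in stream():
    y_hat = w @ x + b                          # test on the incoming instance ...
    abs_errors.append(abs(y - y_hat))
    err = y_hat - y                            # ... then train on it
    w -= lr * err * x
    b -= lr * err

print("mean absolute error:", np.mean(abs_errors))
```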
Utilising Deep Learning to Elicit Expert Uncertainty
Falconer, Julia R., Frank, Eibe, Polaschek, Devon L. L., Joshi, Chaitanya
Recent work [14] has introduced a method for prior elicitation that utilizes records of expert decisions to infer a prior distribution. While this method provides a promising approach to eliciting expert uncertainty, it has only been demonstrated using tabular data, which may not entirely represent the information used by experts to make decisions. In this paper, we demonstrate how analysts can adopt a deep learning approach to utilize the method proposed in [14] with the actual information experts use. We provide an overview of deep learning models that can effectively model expert decision-making to elicit distributions that capture expert uncertainty and present an example examining the risk of colon cancer to show in detail how these models can be used.
Multiple Instance Verification
Xu, Xin, Frank, Eibe, Holmes, Geoffrey
We explore multiple-instance verification, a problem setting where a query instance is verified against a bag of target instances with heterogeneous, unknown relevancy. We show that naive adaptations of attention-based multiple instance learning (MIL) methods and standard verification methods such as Siamese neural networks are unsuitable for this setting: directly combining state-of-the-art (SOTA) MIL methods and Siamese networks is no better, and sometimes significantly worse, than a simple baseline model. Postulating that this may be caused by the failure of the representation of the target bag to incorporate the query instance, we introduce a new pooling approach named "cross-attention pooling" (CAP). Under the CAP framework, we propose two novel attention functions to address the challenge of distinguishing between highly similar instances in a target bag. Through empirical studies on three different verification tasks, we demonstrate that CAP outperforms adaptations of SOTA MIL methods and the baseline by substantial margins, in terms of both classification accuracy and the quality of the explanations provided for the classifications. Ablation studies confirm the superior ability of the new attention functions to identify key instances.
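The paper's two proposed attention functions are not reproduced here; the following is a minimal NumPy sketch of the general cross-attention pooling idea, in which the query instance (rather than a learned query vector) attends over the target bag before the pooled representation is compared with the query. The bilinear scoring function, dimensions, and cosine comparison are assumptions for illustration.

```python
# Minimal sketch of query-conditioned (cross-attention) pooling over a bag.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cross_attention_pool(query, bag, W):
    """Pool a bag of instance embeddings conditioned on the query embedding.

    query: (d,) embedding of the query instance
    bag:   (n, d) embeddings of the target-bag instances
    W:     (d, d) learned bilinear weight matrix (random here, for illustration)
    """
    scores = bag @ W @ query          # one relevance score per bag instance
    weights = softmax(scores)
    return weights @ bag              # (d,) query-conditioned bag representation

rng = np.random.default_rng(0)
d, n = 16, 5
query, bag, W = rng.normal(size=d), rng.normal(size=(n, d)), rng.normal(size=(d, d))

pooled = cross_attention_pool(query, bag, W)
similarity = pooled @ query / (np.linalg.norm(pooled) * np.linalg.norm(query))
print("verification score (cosine similarity):", similarity)
```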
Self-Supervised Pretext Tasks for Alzheimer's Disease Classification using 3D Convolutional Neural Networks on Large-Scale Synthetic Neuroimaging Dataset
Structural magnetic resonance imaging (MRI) studies have shown that Alzheimer's Disease (AD) induces both localised and widespread neural degenerative changes throughout the brain. However, the absence of segmentations that highlight these degenerative changes presents unique challenges for training CNN-based classifiers in a supervised fashion. In this work, we evaluated several unsupervised methods to train a feature extractor for downstream classification of AD versus cognitively normal (CN) subjects. Using the 3D T1-weighted MRI data of CN subjects from the synthetic neuroimaging LDM100K dataset, lightweight 3D CNN-based models are trained for brain age prediction, brain image rotation classification, brain image reconstruction, and a multi-head task combining all three tasks into one. Feature extractors trained on the LDM100K synthetic dataset achieved performance similar to that of the same models trained on real-world data. This supports the feasibility of utilising large-scale synthetic data for pretext task training. All training and testing splits are performed at the subject level to prevent data leakage. Alongside simple preprocessing steps, the random-cropping data augmentation technique shows consistent improvement across all experiments.
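As a toy illustration of the multi-head pretext setup described above, the PyTorch sketch below shares a lightweight 3D CNN encoder between a brain-age regression head and a rotation-classification head (the reconstruction decoder is omitted for brevity). Layer sizes, input resolution, and the number of rotation classes are assumptions, not the paper's configuration.

```python
# Toy multi-head pretext model: shared lightweight 3D CNN encoder with two heads.
import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.features(x)          # (batch, 16) feature vector

class MultiHeadPretext(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder3D()
        self.age_head = nn.Linear(16, 1)  # brain-age regression
        self.rot_head = nn.Linear(16, 4)  # rotation classification (4 assumed classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.age_head(z), self.rot_head(z)

x = torch.randn(2, 1, 64, 64, 64)         # toy batch of 3D T1-weighted volumes
age_pred, rot_logits = MultiHeadPretext()(x)
print(age_pred.shape, rot_logits.shape)
```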
A Retrospective of the Tutorial on Opportunities and Challenges of Online Deep Learning
Kulbach, Cedric, Cazzonelli, Lucas, Ngo, Hoang-Anh, Le-Nguyen, Minh-Huong, Bifet, Albert
Machine learning algorithms have become indispensable in today's world. They support and accelerate the way we make decisions based on the data at hand. This acceleration means that data structures that were valid at one moment may no longer be valid in the future. With these changing data structures, it is necessary to adapt machine learning (ML) systems incrementally to the new data. This is done using online learning or continuous ML technologies. While deep learning technologies have shown exceptional performance on predefined datasets, they have not been widely applied to online, streaming, and continuous learning. In this retrospective of our tutorial titled Opportunities and Challenges of Online Deep Learning, held at ECML PKDD 2023, we provide a brief overview of the opportunities, as well as the potential pitfalls, of applying neural networks in online learning environments using the frameworks River and Deep-River.
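As a minimal, concrete reminder of the incremental setting discussed in the tutorial, the snippet below shows River's standard test-then-train loop with a simple linear model on one of its built-in datasets; Deep-River is designed to plug wrapped neural networks into the same kind of loop, but its API is not shown here.

```python
# Test-then-train (prequential) loop with River.
from river import datasets, linear_model, metrics, preprocessing

model = preprocessing.StandardScaler() | linear_model.LogisticRegression()
metric = metrics.Accuracy()

for x, y in datasets.Phishing():
    y_pred = model.predict_one(x)   # test on the incoming instance first ...
    metric.update(y, y_pred)
    model.learn_one(x, y)           # ... then train on it

print(metric)
```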
Evaluating The Accuracy of Classification Algorithms for Detecting Heart Disease Risk
Alariyibi, Alhaam, El-Jarai, Mohamed, Maatuk, Abdelsalam
The healthcare industry generates enormous amounts of complex clinical data, which makes disease prediction a complicated process. In medical informatics, making effective and efficient decisions is very important. Data Mining (DM) techniques are mainly used to identify and extract hidden patterns and interesting knowledge to diagnose and predict diseases in medical datasets. Nowadays, heart disease is considered one of the most important problems in the healthcare field, and early diagnosis can reduce deaths. DM techniques have proven highly effective for predicting and diagnosing heart diseases. This work applies the classification algorithms J48, Random Forest, and Naïve Bayes to a medical heart disease dataset to assess the accuracy of their performance. We also examine the impact of a feature selection method. A comparative analysis was performed to determine the best technique, using the Waikato Environment for Knowledge Analysis (Weka) software, version 3.8.6. The performance of the algorithms was evaluated using standard metrics such as accuracy, sensitivity, and specificity. The importance of using classification techniques for heart disease diagnosis is highlighted. We also reduced the number of attributes in the dataset, which led to a significant improvement in prediction accuracy. The results indicate that the best algorithm for predicting heart disease was Random Forest, with an accuracy of 99.24%.
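The study itself was run in Weka 3.8.6; purely for illustration, the sketch below compares scikit-learn analogues of the three classifiers (a decision tree in place of J48, a random forest, and Gaussian Naïve Bayes) with and without a simple feature-selection step, on a stand-in dataset, since the paper's heart-disease data is not included here.

```python
# Illustration only: scikit-learn analogues of the classifiers used in the study,
# evaluated with 10-fold cross-validation, with and without feature selection.
from sklearn.datasets import load_breast_cancer  # stand-in, not the heart-disease data
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

classifiers = {
    "DecisionTree (J48-like)": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
}

for name, clf in classifiers.items():
    full = cross_val_score(clf, X, y, cv=10).mean()
    reduced = cross_val_score(
        make_pipeline(SelectKBest(f_classif, k=10), clf), X, y, cv=10
    ).mean()
    print(f"{name}: all features {full:.3f}, top-10 features {reduced:.3f}")
```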
Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies
Yogarajan, Vithya, Dobbie, Gillian, Keegan, Te Taka, Neuwirth, Rostam J.
The benefits and capabilities of pre-trained language models (LLMs) in current and future innovations are vital to any society. However, introducing and using LLMs comes with biases and discrimination, resulting in concerns about equality, diversity and fairness that must be addressed. While understanding and acknowledging bias in LLMs and developing mitigation strategies are crucial, generalised assumptions about societal needs can disadvantage under-represented societies and indigenous populations. Furthermore, ongoing actual and proposed amendments to regulations and laws worldwide also impact research capabilities for tackling the bias problem. This research presents a comprehensive survey synthesising the current trends and limitations in techniques used for identifying and mitigating bias in LLMs, grouping the methods for tackling bias into metrics, benchmark datasets, and mitigation strategies. The importance and novelty of this survey lie in its exploration of the perspective of under-represented societies. We argue that current practices for tackling the bias problem cannot simply be 'plugged in' to address the needs of under-represented societies. We use examples from New Zealand to present requirements for adapting existing techniques to under-represented societies.
Look At Me, No Replay! SurpriseNet: Anomaly Detection Inspired Class Incremental Learning
Lee, Anton, Zhang, Yaqian, Gomes, Heitor Murilo, Bifet, Albert, Pfahringer, Bernhard
Continual learning aims to create artificial neural networks capable of accumulating knowledge and skills through incremental training on a sequence of tasks. The main challenge of continual learning is catastrophic interference, wherein new knowledge overrides or interferes with past knowledge, leading to forgetting. An associated issue is the problem of learning "cross-task knowledge," where models fail to acquire and retain knowledge that helps differentiate classes across task boundaries. A common solution to both problems is "replay," where a limited buffer of past instances is utilized to learn cross-task knowledge and mitigate catastrophic interference. However, a notable drawback of these methods is their tendency to overfit the limited replay buffer. In contrast, our proposed solution, SurpriseNet, addresses catastrophic interference by employing a parameter isolation method and learning cross-task knowledge using an auto-encoder inspired by anomaly detection. SurpriseNet is applicable to both structured and unstructured data, as it does not rely on image-specific inductive biases. We have conducted empirical experiments demonstrating the strengths of SurpriseNet on various traditional vision continual-learning benchmarks, as well as on structured datasets. Source code is available at https://doi.org/10.5281/zenodo.8247906 and https://github.com/tachyonicClock/SurpriseNet-CIKM-23
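SurpriseNet itself combines parameter isolation with a single auto-encoder; the toy sketch below only illustrates the anomaly-detection intuition behind it, namely inferring which previously learned task an input belongs to by reconstruction error. The separate per-task auto-encoders and dimensions used here are simplifying assumptions for illustration, not the paper's architecture.

```python
# Toy illustration of anomaly-detection-style task inference: pick the task whose
# auto-encoder reconstructs the input best (lowest reconstruction error).
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    def __init__(self, dim=32, bottleneck=8):
        super().__init__()
        self.enc = nn.Linear(dim, bottleneck)
        self.dec = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))

@torch.no_grad()
def infer_task(x, task_autoencoders):
    errors = [((ae(x) - x) ** 2).mean().item() for ae in task_autoencoders]
    return min(range(len(errors)), key=errors.__getitem__)

task_autoencoders = [TinyAE() for _ in range(3)]   # one (frozen) AE per learned task
x = torch.randn(32)
print("predicted task:", infer_task(x, task_autoencoders))
```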
7 Best Libraries for Machine Learning Explained - KDnuggets
However, the first machine learning libraries as we know them today, which provide tools and frameworks for implementing and training machine learning models, did not appear until the 1980s and 1990s. One of the earliest machine learning libraries was the Statlib library, which was developed at Carnegie Mellon University in the 1980s. This library provided tools for statistical analysis and machine learning, including support for decision trees and neural networks. Other early machine learning libraries include the Weka library, developed at the University of Waikato in New Zealand in the 1990s, and the LIBSVM library developed at the National Taiwan University in the late 1990s. These libraries provided tools for a variety of machine learning tasks, including classification, regression, and clustering.